Weighted Probabilistic Opinion Pooling Based on Cross-Entropy

Author

  • Vladimira Seckárová
Abstract

In this work we focus on opinion pooling in a finite group of sources, as introduced in [1]. This approach, which heavily exploits the Kullback-Leibler divergence (also known as cross-entropy), allows us to combine sources' opinions given in probabilistic form, i.e. represented by probability mass functions (pmfs). However, the original approach assumes that all sources are equally reliable, with no preference for, e.g., the importance of a particular source. The core of this contribution is a discussion of how preferences among sources (represented by weights) influence the combination, together with a numerical demonstration of the derived theory on an illustrative example.
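The abstract does not reproduce the paper's exact combination rule, but two standard weighted KL-based pools illustrate the idea (function names, weights, and the example pmfs below are ours, not the paper's): minimizing the weighted sum of KL(p_i || q) over q gives the weighted arithmetic (linear) pool, while minimizing the weighted sum of KL(q || p_i) gives the normalized weighted geometric pool.

```python
import math

def linear_pool(pmfs, weights):
    """Weighted arithmetic pool: argmin over q of sum_i w_i * KL(p_i || q)."""
    total = sum(weights)
    w = [wi / total for wi in weights]          # normalize weights to sum to 1
    return [sum(wi * p[k] for wi, p in zip(w, pmfs))
            for k in range(len(pmfs[0]))]

def geometric_pool(pmfs, weights):
    """Weighted geometric pool: argmin over q of sum_i w_i * KL(q || p_i)."""
    total = sum(weights)
    w = [wi / total for wi in weights]
    # Weighted geometric mean in log space, then renormalize to a valid pmf.
    q = [math.exp(sum(wi * math.log(p[k]) for wi, p in zip(w, pmfs)))
         for k in range(len(pmfs[0]))]
    z = sum(q)
    return [qk / z for qk in q]

# Two sources' pmfs over three outcomes; the second source is trusted twice as much.
p1 = [0.7, 0.2, 0.1]
p2 = [0.1, 0.3, 0.6]
print(linear_pool([p1, p2], [1, 2]))     # approx [0.300, 0.267, 0.433]
print(geometric_pool([p1, p2], [1, 2]))
```

Note how the weight on the second source pulls both pooled pmfs toward its opinion; with equal weights the linear pool reduces to the plain average.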


Similar Articles

Max-Pooling Dropout for Regularization of Convolutional Neural Networks

Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advoc...
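The truncated abstract above claims that max-pooling dropout is equivalent to sampling an activation from a multinomial distribution over the pooling region. A small simulation sketch of that equivalence (the retain probability and function names are ours): with retain probability p, the i-th largest activation is selected exactly when it survives and all larger ones are dropped.

```python
import random
from collections import Counter

def maxpool_dropout_sample(acts, retain_p):
    """Drop each unit independently, then max-pool the survivors (0 if none survive)."""
    kept = [a for a in acts if random.random() < retain_p]
    return max(kept) if kept else 0.0

def multinomial_probs(acts, retain_p):
    """Closed-form selection probabilities implied by max-pooling dropout.

    With activations sorted ascending, a_i is output iff it is retained and
    every larger activation is dropped: P(a_i) = p * q**(n-1-i), P(0) = q**n.
    """
    q = 1.0 - retain_p
    s = sorted(acts)
    n = len(s)
    probs = {0.0: q ** n}
    for i, a in enumerate(s):
        probs[a] = retain_p * q ** (n - 1 - i)
    return probs

# Empirical frequencies from dropout-then-max match the multinomial distribution.
acts = [0.2, 0.9, 0.5]
random.seed(0)
freq = Counter(maxpool_dropout_sample(acts, 0.5) for _ in range(100_000))
print(multinomial_probs(acts, 0.5))   # P(0.9) = 0.5, P(0.5) = 0.25, ...
print({a: c / 100_000 for a, c in freq.items()})
```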


Probabilistic opinion pooling generalized. Part two: the premise-based approach

How can several individuals' probability functions on a given σ-algebra of events be aggregated into a collective probability function? Classic approaches to this problem usually require ‘event-wise independence’: the collective probability for each event should depend only on the individuals’ probabilities for that event. In practice, however, some events may be ‘basic’ and others ‘derivative’, ...


Probabilistic opinion pooling generalized. Part one: general agendas

How can different individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events – the agenda – is a σ-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. On...


Towards dropout training for convolutional neural networks

Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this...


Opinion Retrieval Experiments Using Generative Models: Experiments for the TREC 2007 Blog Track

Ranking blog posts that express opinions regarding a given topic should serve a critical function in helping users. We explored a couple of methods for opinion retrieval in the framework of probabilistic language models. The first method combines a topic-relevance model and an opinion-relevance model at the document level, capturing the topic dependence of opinion expressions. The second method com...




Journal:

Volume   Issue

Pages  -

Publication date: 2015